A New Ranking Scheme for Decontaminate Classified Clustering Datasets

AUTHORS

Mukiri Ratna Raju, Department of Computer Science and Engineering, St. Ann’s College of Engineering & Technology, Chirala – Prakasam, Andhra Pradesh, India

ABSTRACT

Sanitization is the process of securely removing sensitive data from a storage system, effectively restoring the system to a state as though the sensitive data had never been stored; in the worst case, sanitization may require erasing every unreferenced block. Privacy-preserving publishing of complex data remains a long-standing challenge for the data mining research community. Because the data carry rich semantics and little is known in advance about the analysis task, over-sanitization is often necessary to guarantee privacy, causing significant loss of data utility. The proposed scheme builds a new, accurate model, and its deployed defence leaves performance on clean datasets relatively unchanged when no attack is present. In addition to existing frameworks, a MapReduce-based approach is incorporated, making the work well suited to MapReduce environments. Label flipping attacks are a notable form of poisoning in which the attacker controls the labels assigned to a small fraction of the training points. We present an efficient algorithm for computing optimal label flipping poisoning attacks, together with a mechanism for detecting suspicious data points that mitigates the effect of such poisoning attacks.
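The abstract describes a label flipping poisoning attack and a defence that flags suspicious training points. The Python sketch below is illustrative only and is not the scheme proposed in the paper: it simulates a random label flipping attack on a synthetic binary classification task and applies a simple k-nearest-neighbour relabelling heuristic as the sanitization step. The helper names flip_labels and knn_relabel, the use of scikit-learn, and all parameter values are assumptions introduced for this example.

```python
# Illustrative sketch only (assumptions: scikit-learn is available; the helpers
# flip_labels and knn_relabel are hypothetical names, not the paper's algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def flip_labels(y, fraction, rng):
    """Simulate a label flipping attack: flip a random fraction of binary labels."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned


def knn_relabel(X, y, k=10):
    """Simple sanitization heuristic: relabel each training point with the
    majority label of its k nearest neighbours."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    return knn.predict(X)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    y_poisoned = flip_labels(y_tr, fraction=0.2, rng=rng)    # poisoned training labels
    y_sanitized = knn_relabel(X_tr, y_poisoned, k=10)        # heuristically cleaned labels

    for name, labels in [("clean", y_tr), ("poisoned", y_poisoned), ("sanitized", y_sanitized)]:
        clf = LogisticRegression(max_iter=1000).fit(X_tr, labels)
        print(f"{name:>9}: test accuracy = {clf.score(X_te, y_te):.3f}")
```

On data of this kind the model trained on poisoned labels typically loses test accuracy, and the relabelling step recovers much of it; the scheme described in the abstract would additionally operate in a MapReduce setting and rank or detect suspicious points rather than blindly relabelling them.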

 

KEYWORDS

Poisoning attacks, label flipping attacks, data anonymization, K-means algorithm.


CITATION

  • APA:
    Ratna Raju, M. (2018). A New Ranking Scheme for Decontaminate Classified Clustering Datasets. International Journal of Advanced Research in Big Data Management System, 2(2). http://dx.doi.org/10.21742/IJARBMS.2018.2.2.04
  • Harvard:
    Ratna Raju,M.(2018). "A New Ranking Scheme for Decontaminate Classified Clustering Datasets". International Journal of Advanced Research in Big Data Management System, 2(2), pp.. doi:http://dx.doi.org/10.21742/IJARBMS.2018.2.2.04
  • IEEE:
    [1] M. Ratna Raju, "A New Ranking Scheme for Decontaminate Classified Clustering Datasets," International Journal of Advanced Research in Big Data Management System, vol. 2, no. 2, Dec. 2018.
  • MLA:
    Ratna Raju, Mukiri. "A New Ranking Scheme for Decontaminate Classified Clustering Datasets." International Journal of Advanced Research in Big Data Management System, vol. 2, no. 2, Dec. 2018, doi:http://dx.doi.org/10.21742/IJARBMS.2018.2.2.04

ISSUE INFO

  • Volume 2, No. 2, 2018
  • ISSN(p): 2208-1674
  • ISSN(o): 2208-1682
  • Published: Dec. 2018
